Big O notation describes how quickly an algorithm's running time grows as the number of elements it operates on increases.
int[] nums = new int[n];
for(int i = 0; i < n; i++){
nums[i] += 1;
}
Would have a big O notation of $O(n)$, since it makes a single pass over the n elements.
int[] nums = new int[n];
for(int i = 0; i < n; i++){
for(int j = 0; j < n; j++){
nums[j] += 1;
}
}
Would have a big O notation of $O(n^2)$, since the inner loop runs n times for each of the n iterations of the outer loop.
Big O notation is not exact. It keeps only the fastest-growing term of the growth curve and discards all numeric multipliers.
The following are some examples of algorithms whose exact growth curves differ from their Big O notations.
int[] nums = new int[n];
for(int i = 0; i < n; i++){
for(int j = 0; j < n; j++){
nums[j] += 1;
}
}
for(int i = 0; i < n; i++){
nums[i] += 1;
}
In this case the exact curve can be described as $n^2 + n$, but since only the dominant term is kept the Big O notation is still $O(n^2)$. The next two algorithms, D and E, illustrate the discarded multipliers:
int[] nums = new int[n];
for(int i = 0; i < n; i++){
nums[i] += 1;
}
for(int i = 0; i < n; i++){
nums[i] += 1;
}
int[] nums = new int[n];
for(int i = 0; i < n; i++){
nums[i] += 1;
nums[i] += 1;
nums[i] += 1;
}
In the case of algorithms D and E the growth curves are $2n$ and $3n$ respectively, yet both are written as $O(n)$.
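To see the discarded multipliers concretely, here is a small sketch (the class and method names are made up for illustration) that counts how many array updates each of the two versions performs; both counts are a constant multiple of n.

public class GrowthCurves {
    // Counts updates for the version with two separate loops (2n updates in total).
    static long twoLoops(int n) {
        int[] nums = new int[n];
        long ops = 0;
        for (int i = 0; i < n; i++) { nums[i] += 1; ops++; }
        for (int i = 0; i < n; i++) { nums[i] += 1; ops++; }
        return ops;
    }

    // Counts updates for the version with three updates inside one loop (3n updates in total).
    static long oneLoopThreeUpdates(int n) {
        int[] nums = new int[n];
        long ops = 0;
        for (int i = 0; i < n; i++) {
            nums[i] += 1; ops++;
            nums[i] += 1; ops++;
            nums[i] += 1; ops++;
        }
        return ops;
    }

    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 1000}) {
            // Both counts grow linearly with n, only the multiplier differs, so both are O(n).
            System.out.println(n + ": " + twoLoops(n) + " vs " + oneLoopThreeUpdates(n));
        }
    }
}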
Any multiplicative constants should be discarded when writing Big O notation. The catch is that this also includes things like the bases of logarithms.
So for instance the big O of $\log_2{n}$ is simply $O(\log{n})$, because $\log_2{n} = \frac{\log{n}}{\log{2}}$ and $\frac{1}{\log{2}}$ is just a multiplicative constant.
Below is an example of the 2 logarithms graphed, and it's clearly visible that one is just a scaled up version of the other.
left=-3; right=100;
bottom=-3;
---
y=\log{x}|dashed|green|x>1
y=(\log{x})/(\log{2})|dashed|red|x>1
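The same thing can be checked numerically (a small sketch, separate from the graph above; Math.log in Java is the natural logarithm): by the change of base formula, $\log_2{n}$ is always the natural logarithm of n multiplied by the same fixed constant.

public class LogBases {
    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 1000, 1000000}) {
            double naturalLog = Math.log(n);              // log base e
            double logBase2 = Math.log(n) / Math.log(2);  // change of base formula
            // The ratio is always the same constant (1 / ln 2, about 1.4427),
            // which is why the base of the logarithm is irrelevant in Big O.
            System.out.printf("n=%d  ln(n)=%.3f  log2(n)=%.3f  ratio=%.4f%n",
                    n, naturalLog, logBase2, logBase2 / naturalLog);
        }
    }
}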
Big O notation studies how algorithms grow when the number of elements they operate on (n) grows.
Typical Big O examples include:
- $O(1)$: constant time
- $O(\log{n})$: logarithmic time
- $O(n)$: linear time
- $O(n\log{n})$: linearithmic time
- $O(n^2)$: quadratic time
Big O is simplified, so constant factors are discarded and only the largest term is considered.
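As a rough sketch of what some of these classes look like in code (the method names are just for illustration, not from any particular library): constant time is e.g. a single array access, logarithmic time is e.g. binary search, and repeating a logarithmic step n times gives linearithmic time.

public class ComplexityExamples {
    // O(1): a single array access, independent of n.
    static int first(int[] nums) {
        return nums[0];
    }

    // O(log n): binary search halves the remaining range on every step.
    static int binarySearch(int[] sortedNums, int target) {
        int lo = 0, hi = sortedNums.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (sortedNums[mid] == target) return mid;
            if (sortedNums[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1; // not found
    }

    // O(n log n): an O(log n) step repeated n times.
    static int repeatedSearch(int[] sortedNums) {
        int found = 0;
        for (int value : sortedNums) {
            if (binarySearch(sortedNums, value) >= 0) found++;
        }
        return found;
    }

    public static void main(String[] args) {
        int[] sortedNums = {1, 3, 5, 7, 9, 11};
        System.out.println(first(sortedNums));           // O(1)
        System.out.println(binarySearch(sortedNums, 7)); // O(log n), prints index 3
        System.out.println(repeatedSearch(sortedNums));  // O(n log n), prints 6
    }
}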
Worst and best case scenarios refer to changes in the algorithm's growth curve depending on how the algorithm ends up being run.
The algorithms we've seen so far can only run one way; their behaviour doesn't depend on the parameters given or on the values of the elements.
Some algorithms, however, do depend on such external factors and thus must be analyzed with worst and best case scenarios.
Worst and best case scenarios don't refer to changing the amount of elements the algorithm runs on (n). n always remains as an unknown term and Big O is always expressed as a function of n.
Instead what changes between the worst and best case scenarios are things like values of external parameters that influence the way the algorithm runs.
An example of an algorithm that would have different best and worst case scenarios could be:
int[] nums = new int[n]; // assume nums is then filled with the n numbers given as input
for(int i = 0; i < n; i++){
if(nums[i] % 2 == 0){// if the number is even
//add 2 to all (adding 2 keeps each element's parity unchanged)
for(int j = 0; j < n; j++){
nums[j] += 2;
}
}
}
This algorithm runs through all the numbers given and, for each even number it finds, adds 2 to every element (adding 2 rather than 1 keeps each element's parity, so one pass doesn't change the outcome of later evenness checks).
The best and worst case scenarios differ: if all the numbers given are odd, the inner loop never runs and the algorithm only passes through the elements once. However, if all the numbers are even, the algorithm runs through all the elements once for every number given.
Thus the big O notation in the best case scenario would be $O(n)$, while in the worst case scenario it would be $O(n^2)$.
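A small sketch of how the two scenarios could be checked in practice (the counting code and class name are just for illustration): fill the array with odd values for the best case and even values for the worst case, then count how many times the inner update runs.

import java.util.Arrays;

public class BestWorstCase {
    // Runs the algorithm above and returns how many inner-loop updates it performed.
    static long countUpdates(int[] nums) {
        int n = nums.length;
        long ops = 0;
        for (int i = 0; i < n; i++) {
            if (nums[i] % 2 == 0) {          // even number found
                for (int j = 0; j < n; j++) {
                    nums[j] += 2;            // adding 2 keeps every element's parity
                    ops++;
                }
            }
        }
        return ops;
    }

    public static void main(String[] args) {
        int n = 1000;

        int[] allOdd = new int[n];
        Arrays.fill(allOdd, 1);              // best case: no even numbers, inner loop never runs
        int[] allEven = new int[n];
        Arrays.fill(allEven, 2);             // worst case: every element triggers the inner loop

        System.out.println("best case inner updates:  " + countUpdates(allOdd));   // 0
        System.out.println("worst case inner updates: " + countUpdates(allEven));  // n * n = 1,000,000
    }
}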
Sorting algorithms that work by comparing elements have been proven to be at best $O(n\log{n})$.
When talking about sorting algorithms we generally consider the worst or average case scenarios. That's because for many sorting algorithms the best case scenario is an array that is already sorted, which brings the time complexity down to $O(n)$.
This divides sorting algorithms into 2 types:
Fast sorting algorithms are those whose worst or average case time complexity is $O(n\log{n})$.
Slow sorting algorithms are those whose worst or average case time complexity is $O(n^2)$.
Common examples include:
- Fast: merge sort, heap sort, quicksort (fast on average)
- Slow: bubble sort, insertion sort, selection sort
This distinction is broad rather than strict and isn't used "officially". The algorithms within these groups can differ amongst each other in average time complexity or outperform one another under specific circumstances.
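As a concrete sketch of the best and worst cases mentioned above (using insertion sort as a typical example of the slower group; the comparison counting is just for illustration), an already sorted array gives roughly n comparisons, while a reverse-sorted array forces the full $O(n^2)$ behaviour.

public class InsertionSortCases {
    // Standard insertion sort; returns the number of comparisons made.
    static long insertionSort(int[] nums) {
        long comparisons = 0;
        for (int i = 1; i < nums.length; i++) {
            int current = nums[i];
            int j = i - 1;
            // Shift larger elements one position to the right until current fits.
            while (j >= 0) {
                comparisons++;
                if (nums[j] <= current) break;
                nums[j + 1] = nums[j];
                j--;
            }
            nums[j + 1] = current;
        }
        return comparisons;
    }

    public static void main(String[] args) {
        int n = 2000;

        int[] sorted = new int[n];
        int[] reversed = new int[n];
        for (int i = 0; i < n; i++) {
            sorted[i] = i;        // best case: already sorted
            reversed[i] = n - i;  // worst case: sorted in reverse
        }

        System.out.println("already sorted: " + insertionSort(sorted));   // about n comparisons -> O(n)
        System.out.println("reverse sorted: " + insertionSort(reversed)); // about n*n/2 comparisons -> O(n^2)
    }
}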